Eliminating graphs by means of parallel knock-out schemes
Authors
Abstract
Similar Resources
Eliminating graphs by means of parallel knock-out schemes
In 1997, Lampert and Slater introduced parallel knock-out schemes, an iterative process on graphs that proceeds in rounds. In each round of this process, every vertex eliminates exactly one of its neighbors. The parallel knock-out number of a graph is the minimum number of rounds after which all vertices have been eliminated (if possible). The parallel knock-out number is related to wel...
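The round-by-round process the abstract describes is simple to simulate. The sketch below is not taken from the paper: it brute-forces every way each surviving vertex can choose a neighbour to eliminate, and reports the fewest rounds that empty the graph, i.e. the parallel knock-out number. The function names (knockout_round, pko), the adjacency-dictionary representation, and the max_rounds cutoff are all assumptions made for illustration.

```python
from itertools import product

def knockout_round(adj, choice):
    # One simultaneous round: every surviving vertex v fires at its chosen
    # neighbour choice[v]; a vertex may fire and be eliminated in the same
    # round. Returns the induced subgraph on the survivors.
    eliminated = set(choice.values())
    survivors = set(adj) - eliminated
    return {v: adj[v] & survivors for v in survivors}

def pko(adj, max_rounds=6):
    # Minimum number of rounds that eliminates every vertex, or None if no
    # scheme with at most max_rounds rounds does so (the graph may be
    # irreducible, e.g. if it contains an isolated vertex).
    # Exhaustive search: exponential, intended for tiny graphs only.
    if not adj:
        return 0
    frontier = [adj]
    for rounds in range(1, max_rounds + 1):
        successors = []
        for g in frontier:
            if any(not nbrs for nbrs in g.values()):
                continue  # a vertex with no surviving neighbour cannot fire
            verts = sorted(g)
            # Enumerate every assignment of one target neighbour per vertex.
            for targets in product(*(sorted(g[v]) for v in verts)):
                h = knockout_round(g, dict(zip(verts, targets)))
                if not h:
                    return rounds
                successors.append(h)
        frontier = successors
    return None

# The 4-cycle is eliminated in one round: adjacent vertices knock each
# other out in pairs.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(pko(c4))  # -> 1
```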
Parallel knock-out schemes in networks
We consider parallel knock-out schemes, a procedure on graphs introduced by Lampert and Slater in 1997 in which each vertex eliminates exactly one of its neighbors in each round. We consider the cases in which, after a finite number of rounds (the minimum such number is called the parallel knock-out number), no vertices of the graph are left. We derive a number of combinatorial and algorith...
The Computational Complexity of the Parallel Knock-Out Problem
We consider computational complexity questions related to parallel knock-out schemes for graphs. In such schemes, in each round, each remaining vertex of a given graph eliminates exactly one of its neighbours. We show that the problem of deciding whether, for a given graph, there is such a scheme that eliminates every vertex is NP-complete. Moreover, we show that, for all fixed positive integers k ≥...
Upper Bounds and Algorithms for Parallel Knock-Out Numbers
We study parallel knock-out schemes for graphs. These schemes proceed in rounds, in each of which every surviving vertex simultaneously eliminates one of its surviving neighbours; a graph is reducible if such a scheme can eliminate every vertex in the graph. We resolve the square-root conjecture, first posed at MFCS 2004, by showing that for a reducible graph G, the minimum number of required rou...
Regularization Through Feature Knock Out
In this paper, we present and analyze a novel regularization technique based on enhancing our dataset with corrupted copies of the original data. The motivation is that since the learning algorithm lacks information about which parts of the data are reliable, it has to produce more robust classification functions. We then demonstrate how this regularization leads to redundancy in the resulting ...
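In contrast with the graph-theoretic items above, the augmentation this last abstract describes can be sketched in a few lines. In the version below, each corrupted copy of a training example has one randomly chosen feature overwritten with that feature's value from another randomly chosen example; the function name feature_knockout, the NumPy representation, and the exact corruption rule are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def feature_knockout(X, y, n_copies=1, rng=None):
    # Return (X, y) enlarged with corrupted copies of the rows of X.
    # Each corrupted row has one feature replaced by that feature's value
    # from a randomly chosen "donor" row; labels are kept unchanged.
    rng = np.random.default_rng(rng)
    n, d = X.shape
    data, labels = [X], [y]
    for _ in range(n_copies):
        Xc = X.copy()
        donors = rng.integers(0, n, size=n)  # row to steal a value from
        feats = rng.integers(0, d, size=n)   # which feature to knock out
        Xc[np.arange(n), feats] = X[donors, feats]
        data.append(Xc)
        labels.append(y)
    return np.vstack(data), np.concatenate(labels)
```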
Journal
Journal title: Discrete Applied Mathematics
Year: 2007
ISSN: 0166-218X
DOI: 10.1016/j.dam.2006.04.034